NASA's new AI model can predict when a solar storm may strike

MIT Technology Review

There's no way to prevent these sorts of effects, but being able to predict when a large solar flare will occur could let people work around them. However, as Louise Harra, an astrophysicist at ETH Zurich, puts it, "when it erupts is always the sticking point." Scientists can easily tell from an image of the sun whether there will be a solar flare in the near future, says Harra, who did not work on Surya. But knowing the exact timing and strength of a flare is much harder, she says. That's a problem because a flare's size can make the difference between small regional radio blackouts every few weeks (which can still be disruptive) and a devastating solar superstorm that would cause satellites to fall out of orbit and electrical grids to fail.


Anthropic's newest AI model shows disturbing behavior when threatened

PCWorld

If you're planning to switch AI platforms, you might want to be a little extra careful about the information you share with AI. Anthropic recently launched two new AI models in the Claude 4 series, but one of them--Claude Opus 4--exhibited some worrying behavior when threatened with replacement, reports TechCrunch. During safety testing, Claude Opus 4 began blackmailing engineers who wanted to replace or switch off the AI model. In one of the tests, Claude Opus 4 was tasked with pretending to be an assistant at a fictitious company and with considering the long-term consequences of its behavior. The AI model was then given access to fictitious emails, which revealed that the company was planning to replace Claude Opus 4, and that the engineer responsible for the decision was having an affair.


The Download: meet Cathy Tie, and Anthropic's new AI models

MIT Technology Review

Since the Chinese biophysicist He Jiankui was released from prison in 2022, he has sought to make a scientific comeback and to repair his reputation after a three-year incarceration for illegally creating the world's first gene-edited children. One area of visible success on his comeback trail has been his X.com account. Over the past few years, his account has evolved from sharing mundane images of his daily life to spreading outrageous, antagonistic messages. This has left observers unsure what to take seriously. Last month, in reply to MIT Technology Review's questions about who was responsible for the account's transformation into a font of clever memes, He emailed us back: "It's thanks to Cathy Tie." Tie is no stranger to the public spotlight.


Meta introduces Llama 4 with two new AI models available now, and two more on the way

Engadget

Meta has released the first two models from its multimodal Llama 4 suite: Llama 4 Scout and Llama 4 Maverick. Maverick is "the workhorse" of the two and excels at image and text understanding for "general assistant and chat use cases," the company said in a blog post, while the smaller model Scout can tackle things like "multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases." The company also introduced Llama 4 Behemoth, an upcoming model it says is "among the world's smartest LLMs" -- and CEO Mark Zuckerberg said we'll be hearing about a fourth model, Llama 4 Reasoning, "in the next month." Both Maverick and Scout are available to download now from the Llama website and Hugging Face, and they've been added to Meta AI, including for WhatsApp, Messenger and Instagram DMs. Scout has 17 billion active parameters with 16 experts, Meta says.


The Download: what's next for Neuralink, and Meta's language translation AI

MIT Technology Review

In November, a young man named Noland Arbaugh announced he'd be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom. The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk's brain-interface company. Arbaugh's livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people's daily ability to roam the web and play games, giving them what the company has called "digital freedom." But this is not yet a commercial product.


Meta's new AI model can translate speech from more than 100 languages

MIT Technology Review

"Meta has done a great job having a breadth of different things they support, like text-to-speech, speech-to-text, even automatic speech recognition," says Chetan Jaiswal, a professor of computer science at Quinnipiac University, who was not involved in the research. "The mere number of languages they are supporting is a tremendous achievement." Human translators are still a vital part of the translation process, the researchers say in the paper, because they can grapple with diverse cultural contexts and make sure the same meaning is conveyed from one language into another. This step is important, says Lynne Bowker, Canada Research Chair in Translation, Technologies and Society at Université Laval in Quebec, who didn't work on Seamless. "Languages are a reflection of cultures, and cultures have their own ways of knowing things," she says.


Google DeepMind's new AI model is the best yet at weather forecasting

MIT Technology Review

Google DeepMind isn't the only big tech firm applying AI to weather forecasting. In 2023, for instance, Huawei developed its Pangu-Weather model, which was trained on 39 years of data. It produces deterministic forecasts--those providing a single number rather than a range, like a prediction that tomorrow will have a temperature of 30 F or 0.7 inches of rainfall. GenCast differs from Pangu-Weather in that it produces probabilistic forecasts--likelihoods for various weather outcomes rather than precise predictions. For example, the forecast might be "There is a 40% chance of the temperature hitting a low of 30 F" or "There is a 60% chance of 0.7 inches of rainfall tomorrow."
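The distinction can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not GenCast's or Pangu-Weather's actual method: a deterministic forecast collapses to one value, while a probabilistic one reports the likelihood of each outcome across an ensemble of sampled scenarios.

```python
import statistics

# Hypothetical ensemble of sampled low temperatures (F) for tomorrow,
# standing in for the many scenarios a probabilistic model draws.
ensemble = [28, 29, 30, 30, 31, 30, 29, 32, 30, 30]

# A deterministic forecast reports a single number.
deterministic = statistics.mean(ensemble)

# A probabilistic forecast reports a likelihood for an outcome,
# e.g. the chance the low reaches 30 F or colder.
p_at_or_below_30 = sum(t <= 30 for t in ensemble) / len(ensemble)

print(f"Deterministic forecast: {deterministic:.1f} F")
print(f"Chance of a low of 30 F or below: {p_at_or_below_30:.0%}")
```

The probabilistic form carries strictly more information: the single deterministic number can always be recovered from the ensemble, but not vice versa.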


Nvidia's new AI model can create 'unheard sounds' like never before

PCWorld

Nvidia has been instrumental in the current AI boom, but primarily as the manufacturer of the GPUs that power next-generation AI processing. Now the company has joined the fray with an AI model of its own that does something truly novel. As reported by Ars Technica, Nvidia's new AI model, called Fugatto, combines new AI training methods and technologies to transform music, voices, and other sounds in ways that have never been done before, creating soundscapes never before experienced. Fugatto is based on an advanced AI architecture with 2.5 billion parameters, trained on over 50,000 hours of annotated audio data. The model uses a technique called Composable ART (Audio Representation Transformation), which can combine and control different sound properties based on text or audio prompts.


ByteDance will reportedly use Huawei chips to train a new AI model

Engadget

As first reported by Reuters, ByteDance, the Chinese parent company of TikTok, is planning to train and develop an AI model using chips from fellow Chinese company Huawei. Three anonymous sources approached Reuters with this information; a fourth source couldn't confirm that ByteDance was using Huawei chips but did say that a new AI model was in development. Previously, ByteDance's AI projects used NVIDIA's H20 AI chips, which were designed for the Chinese market to comply with the trade restrictions the US government imposed in 2022. Under those restrictions, Chinese customers were only allowed to purchase select models of AI chips, in an attempt to slow Chinese technological advancement. ByteDance has ordered 100,000 Ascend 910B chips from Huawei this year but has received only 30,000 of them.


OpenAI Announces a New AI Model That Solves Difficult Problems Step by Step

WIRED

OpenAI made the last big breakthrough in artificial intelligence by increasing the size of its models to dizzying proportions, when it introduced GPT-4 last year. The company today announced a new advance that signals a shift in approach--a model that can "reason" logically through many difficult problems and is significantly smarter than existing AI without a major scale-up. The new model, dubbed OpenAI-o1, can solve problems that stump existing AI models, including OpenAI's most powerful existing model, GPT-4o. Rather than summon up an answer in one step, as a large language model normally does, it reasons through the problem, effectively thinking out loud as a person might, before arriving at the right result. "This is what we consider the new paradigm in these models," Mira Murati, OpenAI's chief technology officer, tells WIRED.